71 Treatment with TMS Improves Aspects of Attention in Depression: A Pilot Study
- Nicole C Walker, Nathan Ramirez, Laurie Chin, Sonia S Rehman, Stephanie C Gee, Kathleen Hodges, Leanne M Williams, Robert Hickson, L. Chauncey Green, Talaya Patton, Hanaa Aldasouqi, Noah S Philip, F. Andrew Kozel, Jerome A Yesavage, Michelle R Madore
-
- Journal:
- Journal of the International Neuropsychological Society / Volume 29 / Issue s1 / November 2023
- Published online by Cambridge University Press:
- 21 December 2023, pp. 476-477
-
- Article
-
-
Objective:
Repetitive transcranial magnetic stimulation (TMS) is an evidence-based treatment for adults with treatment-resistant depression (TRD). The standard clinical protocol for TMS is to stimulate the left dorsolateral prefrontal cortex (DLPFC). Although the DLPFC is a defining region in the cognitive control network of the brain and is implicated in executive functions such as attention and working memory, we lack knowledge about whether TMS improves cognitive function independent of depression symptoms. This exploratory analysis sought to address this gap by assessing changes in attention before and after completion of a standard course of TMS in Veterans with TRD.
Participants and Methods: Participants consisted of 7 Veterans (14.3% female; age M = 46.14, SD = 7.15; years of education M = 16.86, SD = 3.02) who completed a full 30-session course of TMS treatment and had significant depressive symptoms at baseline (Patient Health Questionnaire-9; PHQ-9 score >5). Participants were given neurocognitive assessments measuring aspects of attention [Wechsler Adult Intelligence Scale 4th Edition (WAIS-IV) subtests: Digits Forward, Digits Backward, and Number Sequencing] at baseline and again after completion of TMS treatment. The relationships between pre- and post-treatment scores were examined using paired-samples t-tests for continuous variables and linear regression to covary for depression and posttraumatic stress disorder (PTSD), which is often comorbid with depression in Veteran populations.
Results: There was a significant improvement in Digit Span Forward (p=.01, d=-.53), but not in Digit Span Backward (p=.06) or Number Sequencing (p=.54), post-TMS treatment. Depression severity was not a significant predictor of performance on Digit Span Forward (F(1,5)=.29, p=.61) after TMS treatment. PTSD severity was also not a significant predictor of performance on Digit Span Forward (F(1,5)=1.31, p=.32).
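As a sketch of the analysis described above, the paired-samples t-test, Cohen's d for paired data, and the covariate check might look like the following. The scores and PHQ-9 changes below are made up for illustration; the study's raw data are not reproduced in the abstract.

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post Digit Span Forward scores for 7 participants
pre = np.array([8, 9, 7, 10, 8, 9, 7], dtype=float)
post = np.array([10, 10, 8, 11, 9, 11, 8], dtype=float)

# Paired-samples t-test on the pre/post difference
t, p = stats.ttest_rel(post, pre)

# Cohen's d for paired data: mean difference / SD of the differences
diff = post - pre
d = diff.mean() / diff.std(ddof=1)

# Regress post-treatment change on a covariate (hypothetical change in
# PHQ-9 depression score), mirroring the covariate check in the abstract
phq9_change = np.array([-6, -4, -2, -8, -3, -7, -1], dtype=float)
slope, intercept, r, p_cov, se = stats.linregress(phq9_change, diff)
```

A non-significant `p_cov` would indicate, as the abstract reports, that the cognitive change is not explained by symptom change.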
Conclusions: Findings suggest that a standard course of TMS improves performance on less demanding measures of working memory, but possibly not on more demanding aspects of working memory. This improvement in cognitive function was independent of improvements in depression and PTSD symptoms. Further investigation in a larger sample and with direct neuroimaging measures of cognitive function is warranted.
Assessing the reliability and cross-sectional and longitudinal validity of fifteen bioelectrical impedance analysis devices
- Madelin R. Siedler, Christian Rodriguez, Matthew T. Stratton, Patrick S. Harty, Dale S. Keith, Jacob J. Green, Jake R. Boykin, Sarah J. White, Abegale D. Williams, Brielle DeHaven, Grant M. Tinsley
-
- Journal:
- British Journal of Nutrition / Volume 130 / Issue 5 / 14 September 2023
- Published online by Cambridge University Press:
- 21 November 2022, pp. 827-840
- Print publication:
- 14 September 2023
-
- Article
-
- Open access
-
The purpose of this investigation was to expand upon the limited existing research examining the test–retest reliability, cross-sectional validity and longitudinal validity of a sample of bioelectrical impedance analysis (BIA) devices as compared with a laboratory four-compartment (4C) model. Seventy-three healthy participants aged 19–50 years were assessed by each of fifteen BIA devices, with resulting body fat percentage estimates compared with a 4C model utilising air displacement plethysmography, dual-energy X-ray absorptiometry and bioimpedance spectroscopy. A subset of thirty-seven participants returned for a second visit 12–16 weeks later and were included in an analysis of longitudinal validity. The sample of devices included fourteen consumer-grade and one research-grade model in a variety of configurations: hand-to-hand, foot-to-foot and bilateral hand-to-foot (octapolar). BIA devices demonstrated high reliability, with precision error ranging from 0·0 to 0·49 %. Cross-sectional validity varied, with constant error relative to the 4C model ranging from −3·5 (sd 4·1) % to 11·7 (sd 4·7) %, standard error of the estimate values of 3·1–7·5 % and Lin’s concordance correlation coefficients (CCC) of 0·48–0·94. For longitudinal validity, constant error ranged from −0·4 (sd 2·1) % to 1·3 (sd 2·7) %, with standard error of the estimate values of 1·7–2·6 % and Lin’s CCC of 0·37–0·78. While performance varied widely across the sample investigated, select models of BIA devices (particularly octapolar and select foot-to-foot devices) may hold potential utility for the tracking of body composition over time, particularly in contexts in which the purchase or use of a research-grade device is infeasible.
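Lin's concordance correlation coefficient (CCC), one of the validity metrics reported above, penalises both imprecision and constant bias, unlike Pearson's r. A minimal implementation, with hypothetical body-fat estimates rather than the study's data:

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two methods:
    2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sx2, sy2 = x.var(), y.var()          # population variances
    sxy = ((x - mx) * (y - my)).mean()   # population covariance
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# Hypothetical body-fat estimates (%): a BIA device vs. the 4C criterion
bia = [22.1, 30.4, 18.7, 25.9, 33.2]
fourc = [20.5, 29.8, 17.9, 24.4, 31.0]
ccc = lins_ccc(bia, fourc)
```

Perfect agreement gives a CCC of 1; a device that tracks the criterion closely but with a constant offset is penalised, which is why the study reports CCC alongside constant error and the standard error of the estimate.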
2201 A multi-stakeholder analysis on preparing future pediatricians to improve the mental health of children
- Cori M. Green, John Walkup, William Trochim
-
- Journal:
- Journal of Clinical and Translational Science / Volume 2 / Issue S1 / June 2018
- Published online by Cambridge University Press:
- 21 November 2018, p. 78
-
- Article
-
- Open access
-
OBJECTIVES/SPECIFIC AIMS: (1) Develop a concept map of ideas from diverse stakeholders on how to best improve training programs. (2) Assess the degree of consensus amongst stakeholders regarding importance and feasibility. (3) Identify which ideas are both important and feasible to inform policy and curricular interventions. METHODS/STUDY POPULATION: Concept mapping is a four-step approach to data gathering and analysis. (1) Stakeholders [pediatricians (peds), MH professionals (MHPs), trainees, parents] were recruited to brainstorm ideas in response to this prompt: “To prepare future pediatricians for their role in caring for children and adolescents with mental and behavioral health conditions, residency training needs to...”. (2) Content analysis was used to edit and synthesize ideas. (3) A subgroup of stakeholders sorted ideas into groups and rated them for importance and feasibility. (4) A large group of anonymous participants rated ideas for importance and feasibility. Multidimensional scaling and hierarchical cluster analysis grouped ideas into clusters. Average importance and feasibility were calculated for each cluster and were compared statistically in each cluster and between subgroups. Bivariate plots were created to show the relative importance and feasibility of each idea. The “Go-Zone” is where statements are feasible and important and can drive action planning. RESULTS/ANTICIPATED RESULTS: Content analysis was applied to 497 ideas, resulting in 99 that were sorted by 40 stakeholders into 7 clusters: Modalities, Prioritization of MH, Systems-Based, Self-Awareness/Relationship Building, Clinical Assessment, Treatment, and Diagnosis Specific Skills. In total, 216 participants rated statements for importance, 209 for feasibility: 17% MHPs, 82% peds, 55% trainees. There was little correlation between importance and feasibility for each cluster.
Compared with peds, MHPs rated Modalities, and Prioritization of MH higher in importance and Prioritization of MH as more feasible, but Treatment less feasible. Trainees rated 5 of 7 clusters higher in importance and all clusters more feasible than established practitioners. DISCUSSION/SIGNIFICANCE OF IMPACT: Statements deemed feasible and important should drive policy changes and curricular development. Innovation is needed to make important ideas more feasible. Differences between importance and feasibility in each cluster and between stakeholders need to be addressed to help training programs evolve.
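The sort-and-cluster step described in the methods above (sorting data fed into hierarchical cluster analysis) can be sketched with a small hypothetical co-sort matrix; the study's own 99 statements and 40 sorters are not reproduced here.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical co-sort counts: how many of 10 sorters placed statements
# i and j in the same pile (6 statements shown for illustration).
cosort = np.array([
    [10, 8, 7, 1, 0, 1],
    [ 8, 10, 9, 2, 1, 0],
    [ 7, 9, 10, 1, 0, 2],
    [ 1, 2, 1, 10, 8, 7],
    [ 0, 1, 0, 8, 10, 9],
    [ 1, 0, 2, 7, 9, 10],
], dtype=float)

# Convert similarity to dissimilarity, then cluster with Ward's method
dissim = 1.0 - cosort / 10.0
Z = linkage(squareform(dissim, checks=False), method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")  # cut into 2 clusters
```

Statements that sorters tend to group together end up in the same cluster; the study's multidimensional scaling step additionally places statements on a 2D map before clustering.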
Follow Up of GW170817 and Its Electromagnetic Counterpart by Australian-Led Observing Programmes
- I. Andreoni, K. Ackley, J. Cooke, A. Acharyya, J. R. Allison, G. E. Anderson, M. C. B. Ashley, D. Baade, M. Bailes, K. Bannister, A. Beardsley, M. S. Bessell, F. Bian, P. A. Bland, M. Boer, T. Booler, A. Brandeker, I. S. Brown, D. A. H. Buckley, S.-W. Chang, D. M. Coward, S. Crawford, H. Crisp, B. Crosse, A. Cucchiara, M. Cupák, J. S. de Gois, A. Deller, H. A. R. Devillepoix, D. Dobie, E. Elmer, D. Emrich, W. Farah, T. J. Farrell, T. Franzen, B. M. Gaensler, D. K. Galloway, B. Gendre, T. Giblin, A. Goobar, J. Green, P. J. Hancock, B. A. D. Hartig, E. J. Howell, L. Horsley, A. Hotan, R. M. Howie, L. Hu, Y. Hu, C. W. James, S. Johnston, M. Johnston-Hollitt, D. L. Kaplan, M. Kasliwal, E. F. Keane, D. Kenney, A. Klotz, R. Lau, R. Laugier, E. Lenc, X. Li, E. Liang, C. Lidman, L. C. Luvaul, C. Lynch, B. Ma, D. Macpherson, J. Mao, D. E. McClelland, C. McCully, A. Möller, M. F. Morales, D. Morris, T. Murphy, K. Noysena, C. A. Onken, N. B. Orange, S. Osłowski, D. Pallot, J. Paxman, S. B. Potter, T. Pritchard, W. Raja, R. Ridden-Harper, E. Romero-Colmenero, E. M. Sadler, E. K. Sansom, R. A. Scalzo, B. P. Schmidt, S. M. Scott, N. Seghouani, Z. Shang, R. M. Shannon, L. Shao, M. M. Shara, R. Sharp, M. Sokolowski, J. Sollerman, J. Staff, K. Steele, T. Sun, N. B. Suntzeff, C. Tao, S. Tingay, M. C. Towner, P. Thierry, C. Trott, B. E. Tucker, P. Väisänen, V. Venkatraman Krishnan, M. Walker, L. Wang, X. Wang, R. Wayth, M. Whiting, A. Williams, T. Williams, C. Wolf, C. Wu, X. Wu, J. Yang, X. Yuan, H. Zhang, J. Zhou, H. Zovaro
-
- Journal:
- Publications of the Astronomical Society of Australia / Volume 34 / 2017
- Published online by Cambridge University Press:
- 20 December 2017, e069
-
- Article
-
-
The discovery of the first electromagnetic counterpart to a gravitational wave signal has generated follow-up observations by over 50 facilities world-wide, ushering in the new era of multi-messenger astronomy. In this paper, we present follow-up observations of the gravitational wave event GW170817 and its electromagnetic counterpart SSS17a/DLT17ck (IAU label AT2017gfo) by 14 Australian telescopes and partner observatories as part of Australian-based and Australian-led research programs. We report early- to late-time multi-wavelength observations, including optical imaging and spectroscopy, mid-infrared imaging, radio imaging, and searches for fast radio bursts. Our optical spectra reveal that the transient source emission cooled from approximately 6 400 K to 2 100 K over a 7-d period and produced no significant optical emission lines. The spectral profiles, cooling rate, and photometric light curves are consistent with the expected outburst and subsequent processes of a binary neutron star merger. Star formation in the host galaxy probably ceased at least a Gyr ago, although there is evidence for a galaxy merger. Binary pulsars with short (100 Myr) decay times are therefore unlikely progenitors, but pulsars like PSR B1534+12 with its 2.7 Gyr coalescence time could produce such a merger. The displacement (~2.2 kpc) of the binary star system from the centre of the main galaxy is not unusual for stars in the host galaxy or stars originating in the merging galaxy, and therefore any constraints on the kick velocity imparted to the progenitor are poor.
Enhancing the Biological Activity of Nicosulfuron with pH Adjusters
- Jerry M. Green, William R. Cahill
-
- Journal:
- Weed Technology / Volume 17 / Issue 2 / June 2003
- Published online by Cambridge University Press:
- 20 January 2017, pp. 338-345
-
- Article
-
Adjuvants that increase the pH of the spray mixture and solubilize nicosulfuron can enhance biological activity under specific conditions. These conditions include high nicosulfuron rates, difficult-to-control weeds, low spray volumes, and initially acidic spray conditions. The most effective pH adjusters are tribasic potassium phosphate, sodium carbonate, and triethanolamine. In low spray volumes, these adjusters make the spray mixture alkaline and often enhance the activity of nicosulfuron on common cocklebur and large crabgrass. Alkaline conditions rapidly dissolve the sulfonylurea particles and enhance activity with crop oil concentrate, modified seed oil, and hydrophilic nonionic surfactants. pH adjusters did not enhance activity with lipophilic surfactants. Ammonium sulfate slightly increases the pH of spray mixtures and increases nicosulfuron activity depending on species, adjuvant type, and pH adjuster. These results generally support the concept that herbicide solubilization is necessary to maximize foliar activity.
14 - Nested logit estimation
- from Part III - The suite of choice models
- David A. Hensher, University of Sydney, John M. Rose, University of Sydney, William H. Greene, New York University
-
- Book:
- Applied Choice Analysis
- Published online:
- 05 June 2015
- Print publication:
- 11 June 2015, pp 560-600
-
- Chapter
-
Summary
In mathematics you don’t understand things. You just get used to them.
(John von Neumann, 1903–57)
Introduction
The majority of practical choice study applications do not progress beyond the simple multinomial logit (MNL) model discussed in previous chapters. The ease of computation, and the wide availability of software packages capable of estimating the MNL model, suggest that this trend will continue. The ease with which the MNL model may be estimated, however, comes at a price in the form of the assumption of independently and identically distributed (IID) error components. While the IID assumption and the behaviorally comparable assumption of Independence of Irrelevant Alternatives (IIA) allow for ease of computation (as well as providing a closed-form solution), as with any assumption, violations both can and do occur. When violations do occur, the cross-substitution effects (or correlation) observed between pairs of alternatives are no longer equal given the presence or absence of other alternatives within the complete list of available alternatives in the model (Louviere et al. 2000).
The nested logit (NL) model represents a partial relaxation of the IID and IIA assumptions of the MNL model. As discussed in Chapter 4, this relaxation occurs in the variance components of the model, together with some correlation within sub-sets of alternatives. While more advanced models such as the mixed multinomial logit (see Chapter 15) relax the IID assumption more fully, the NL model represents a significant advance for the analyst. As with the MNL model, the NL model is relatively straightforward to estimate and offers the added benefit of a closed-form solution. More advanced models relax the IID assumption in terms of the covariances; however, all have open-form solutions and as such require complex calculations to identify changes in the choice probabilities through varying levels of attributes (see Louviere et al. (2000) and Train (2003, 2009), as well as the following chapters in this book). In this chapter, we show how to use Nlogit to estimate NL models and to interpret the output, especially the output that is additional to what is obtained when estimating an MNL model. As with previous chapters, we are very specific in our explanation of the command syntax as well as the output generated.
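In generic notation (the book's own symbols may differ), the closed-form MNL probability, the IIA odds ratio it implies, and the two-level NL probability that partially relaxes it can be written as:

```latex
% MNL: closed-form probability and the IIA odds ratio
P_i = \frac{\exp(V_i)}{\sum_{j \in J}\exp(V_j)}, \qquad
\frac{P_i}{P_j} = \exp(V_i - V_j)
\quad \text{(independent of all other alternatives: IIA)}

% NL: alternative i in nest k, with scale parameter \lambda_k and
% inclusive value I_k summarising the attractiveness of the nest
P_i = \underbrace{\frac{\exp(V_i/\lambda_k)}
                       {\sum_{j\in k}\exp(V_j/\lambda_k)}}_{P(i \mid k)}
      \times
      \underbrace{\frac{\exp(\lambda_k I_k)}
                       {\sum_{m}\exp(\lambda_m I_m)}}_{P(k)},
\qquad
I_k = \ln\sum_{j\in k}\exp(V_j/\lambda_k)
```

When every scale parameter \(\lambda_k = 1\), the NL probability collapses back to the MNL form, which is one way the partial relaxation described above can be tested empirically.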
Contents
- David A. Hensher, University of Sydney, John M. Rose, University of Sydney, William H. Greene, New York University
-
- Book:
- Applied Choice Analysis
- Published online:
- 05 June 2015
- Print publication:
- 11 June 2015, pp v-xvi
-
- Chapter
15 - Mixed logit estimation
- from Part III - The suite of choice models
- David A. Hensher, University of Sydney, John M. Rose, University of Sydney, William H. Greene, New York University
-
- Book:
- Applied Choice Analysis
- Published online:
- 05 June 2015
- Print publication:
- 11 June 2015, pp 601-705
-
- Chapter
-
Summary
The secret of greatness is simple: do better work than any other man in your field – and keep on doing it.
(Wilfred A. Peterson)
Introduction
The choice modeler has available a number of econometric models. Traditionally, the more common models applied to choice data are the multinomial logit (MNL) and nested logit (NL) models. Increasingly, however, choice modelers are estimating the mixed logit (ML) or random parameters logit model. In Chapter 4, we outlined the theory behind this class of models. In this chapter we estimate a range of ML models using Nlogit, including recent developments in scaled mixed logit (or generalized mixed logit). As with Chapters 11 and 13 (MNL model) and Chapter 14 (NL model), we explain in detail the commands necessary to estimate ML models as well as the interpretation of the output generated by Nlogit. An understanding of the theory behind the ML model is presented in Chapter 4; however we anticipate that in reading this chapter you will have a better understanding of the model, at least from an empirical standpoint.
The mixed logit model basic commands
The ML model syntax commands build on the commands of the MNL model discussed in Chapter 11. We begin with the basic ML syntax command, building upon this in later sections as we add to the complexity of the ML model.
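In generic notation (not taken verbatim from this book), the ML probability replaces the fixed-parameter MNL probability with an integral over the random parameter distribution, which is why it has no closed form and is approximated by simulation:

```latex
% Conditional on \beta the MNL form holds; unconditionally we integrate
% over the mixing distribution f(\beta \mid \theta)
P_{ni} = \int \frac{\exp(x'_{ni}\beta)}{\sum_{j}\exp(x'_{nj}\beta)}
         \, f(\beta \mid \theta)\, d\beta

% Simulated probability over R draws \beta^{(r)} from f(\beta \mid \theta)
\check{P}_{ni} = \frac{1}{R}\sum_{r=1}^{R}
  \frac{\exp\!\big(x'_{ni}\beta^{(r)}\big)}
       {\sum_{j}\exp\!\big(x'_{nj}\beta^{(r)}\big)}
```

The number of draws \(R\) and the type of draws (e.g., Halton sequences) are estimation choices the chapter's Nlogit commands control.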
21 - Attribute processing, heuristics, and preference construction
- from Part IV - Advanced topics
- David A. Hensher, University of Sydney, John M. Rose, University of Sydney, William H. Greene, New York University
-
- Book:
- Applied Choice Analysis
- Published online:
- 05 June 2015
- Print publication:
- 11 June 2015, pp 937-1071
-
- Chapter
-
Summary
This chapter was co-authored with Waiyan Leong and Andrew Collins.
Introduction
Any economic decision or judgment has an associated, often subconscious, psychological process prodding it along, in ways that make the “neoclassical ambition of avoiding [this] necessity … unrealizable” (Simon 1978, 507). The translation of this fundamental statement on human behavior has become associated with the identification of the heuristics that individuals use to simplify preference construction and hence make choices, or to make the representation of what matters relevant, regardless of the degree of complexity as perceived by the decision maker and/or the analyst. Despite the recognition in behavioral research, as long ago as the 1950s (see Svenson 1998), that cognitive processes play a key role in preference revelation, and the reminders throughout the literature (see McFadden 2001b; Yoon and Simonson 2008) about rule-driven behavior, we still see relatively little of the decision processing literature incorporated into discrete choice modeling, which is, increasingly, becoming the mainstream empirical context for preference measurement and willingness to pay (WTP) derivatives.
There is an extensive literature focussing on these matters that might broadly be described as heuristics and biases, and which is crystallized in the notion of process, in contrast to outcome.
List of Tables
- David A. Hensher, University of Sydney, John M. Rose, University of Sydney, William H. Greene, New York University
-
- Book:
- Applied Choice Analysis
- Published online:
- 05 June 2015
- Print publication:
- 11 June 2015, pp xxii-xxviii
-
- Chapter
Preface
- David A. Hensher, University of Sydney, John M. Rose, University of Sydney, William H. Greene, New York University
-
- Book:
- Applied Choice Analysis
- Published online:
- 05 June 2015
- Print publication:
- 11 June 2015, pp xxix-xxx
-
- Chapter
-
Summary
I’m all in favor of keeping dangerous weapons out of the hands of fools. Let’s start with typewriters.
(Frank Lloyd Wright, 1868–1959)
Almost without exception, everything human beings undertake involves a choice (consciously or subconsciously), including the choice not to choose. Some choices are the result of habit while others are fresh decisions made with great care, based on whatever information is available at the time from past experiences and/or current inquiry.
Over the last forty years, there has been a steadily growing interest in the development and application of quantitative statistical methods to study choices made by individuals (and, to a lesser extent, groups of individuals or organizations). With an emphasis on both understanding how choices are made and on forecasting future choice responses, a healthy literature has evolved. Reference works by Louviere et al. (2000) and Train (2003, 2009) synthesize the contributions. However, while these sources represent the state of the art (and practice), they are technically advanced and often a challenge for both beginners and practitioners.
3 - Choice and utility
- from Part I - Getting started
- David A. Hensher, University of Sydney, John M. Rose, University of Sydney, William H. Greene, New York University
-
- Book:
- Applied Choice Analysis
- Published online:
- 05 June 2015
- Print publication:
- 11 June 2015, pp 30-79
-
- Chapter
-
Summary
To call in the statistician after the experiment is done may be no more than asking him to perform a post-mortem examination: he may be able to say what the experiment died of.
(Ronald Fisher)
Introduction
As seen in Chapter 2, individual preferences, subject to any constraints faced by those operating in a market, will give rise to choices. These choices in the aggregate sum to represent the total demand for various goods and services within that market. Rather than attempt to model demand based on aggregate level data, discrete choice models seek to model demand using disaggregate level data. Note that this does not necessarily mean that different discrete choice models are estimated for each individual, although some researchers do attempt such feats (e.g., Louviere et al. 2008). Rather, models dealing with aggregate level demand data typically work with variables where each data point represents the amount of some good or service sold at a specific point in time, whereas discrete choice models are typically applied to data where each data point represents an individual choice situation, and the sum of the choices combines to produce information about overall demand.
Importantly, to be able to refer to “demand” we have to allow for the presence of a “no choice,” since some goods and services are not consumed by an individual. Throughout this chapter and the rest of the book, we will refer to both choice and demand, and treat them as interchangeable words. In doing so, we also recognize the broader context within which discrete choice models can be used, that often distinguishes between discrete choice models and a complete system of demand models, the latter at a more aggregate economy wide level, in contrast to discrete choice models that are most commonly applied at a sectoral level (e.g., transport or health). Truong and Hensher (2012), among others, develop the theoretical linkages between discrete choice models and continuous choice models, where discrete choice models focus on the structure of tastes or preferences at the individual level, while continuous demand models can be used to describe the interactions between these preferences at the industry or sectoral level, extendable to an entire economy.
7 - Statistical inference
- from Part I - Getting started
- David A. Hensher, University of Sydney, John M. Rose, University of Sydney, William H. Greene, New York University
-
- Book:
- Applied Choice Analysis
- Published online:
- 05 June 2015
- Print publication:
- 11 June 2015, pp 320-359
-
- Chapter
-
Summary
Introduction
This chapter will discuss some issues in statistical inference in the analysis of choice models. We are concerned with two kinds of computations, hypothesis tests and variance estimation. To illustrate the analyses, we will work through an example based on a revealed preference (RP) data set. In this chapter, we present syntax and output generated using Nlogit to demonstrate the concepts covered. The syntax and output are, for the more familiar reader, largely self-explanatory; however, for the less familiar reader, we refer you to Chapter 11, which you may wish to read before going further. The multinomial logit model for the study is shown in the following Nlogit set up, which gives the utility functions for four travel modes: bus, train, busway and car, respectively:
;Model:
u(bs) = bs + actpt*act + invcpt*invc + invtpt*invt2 + egtpt*egt + trpt*trnf /
u(tn) = tn + actpt*act + invcpt*invc + invtpt*invt2 + egtpt*egt + trpt*trnf /
u(bw) = bw + actpt*act + invcpt*invc + invtpt*invt2 + egtpt*egt + trpt*trnf /
u(cr) = TC*TC + PC*PC + invtcar*invt + egtcar*egt
The attributes are act = access time, invc = in vehicle cost, invt2 = in vehicle time, egt = egress time, trnf = transfer wait time, tc = toll cost, pc = parking cost, and invt = in vehicle time for car. Where a particular example uses a method given in more detail in later chapters, we will provide a cross-reference.
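To make the link from these utility functions to choice probabilities concrete, here is a sketch with made-up coefficient values and attribute levels (the estimated parameters from the RP data set are not shown in this excerpt); the MNL probabilities are a softmax over the four utilities.

```python
import numpy as np

# Hypothetical parameter values; the real values come from Nlogit estimation
beta = {"bs": -0.8, "tn": -0.5, "bw": -0.3,            # mode-specific constants
        "actpt": -0.05, "invcpt": -0.2, "invtpt": -0.04,
        "egtpt": -0.06, "trpt": -0.3,
        "tc": -0.15, "pc": -0.10, "invtcar": -0.05, "egtcar": -0.06}

# Public transport utility, mirroring the three u(bs)/u(tn)/u(bw) functions
def v_pt(const, act, invc, invt2, egt, trnf):
    return (const + beta["actpt"] * act + beta["invcpt"] * invc
            + beta["invtpt"] * invt2 + beta["egtpt"] * egt
            + beta["trpt"] * trnf)

# Utilities for one hypothetical traveller: bus, train, busway, car
V = np.array([
    v_pt(beta["bs"], 10, 2.0, 35, 8, 1),
    v_pt(beta["tn"], 8, 2.5, 30, 6, 0),
    v_pt(beta["bw"], 9, 2.2, 32, 7, 1),
    beta["tc"] * 3.0 + beta["pc"] * 5.0
        + beta["invtcar"] * 25 + beta["egtcar"] * 4,   # u(cr)
])

# MNL choice probabilities: softmax over utilities (shifted for stability)
P = np.exp(V - V.max()) / np.exp(V - V.max()).sum()
```

Because the MNL form is closed, the probabilities follow directly from the utilities; the hypothesis tests discussed in this chapter concern the sampling variability of the estimated coefficients that enter them.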
22 - Group decision making
- from Part IV - Advanced topics
- David A. Hensher, University of Sydney, John M. Rose, University of Sydney, William H. Greene, New York University
-
- Book:
- Applied Choice Analysis
- Published online:
- 05 June 2015
- Print publication:
- 11 June 2015, pp 1072-1115
-
- Chapter
-
Summary
Introduction
The literature on household economics has made substantial progress in the study of group decision making, beginning with the initial theoretical contributions (Becker 1993; Browning and Chiappori 1998; Lampietti 1999; Chiuri 2000; Vermeulen 2002), and subsequent empirical applications in various fields, such as marketing (Arora and Allenby 1999; Adamowicz et al. 2005), transport (Brewer and Hensher 2000; Hensher et al. 2007), and environmental economics (Quiggin 1998; Smith and Houtven 1998; Bateman and Munroe 2005; Dosman and Adamowicz 2006). Recent studies, for example, provide evidence of substantial differences in taste intensities between domestic partners, and make an attempt at reconciling them with observed joint choices using power functions (Dosman and Adamowicz 2006; Beharry et al. 2009). The evidence collected so far indicates that, for some categories of decisions, the conventional practice of selecting one member of the couple as representative of the tastes of the entire household may be biased when compared with the preference estimates underlying joint deliberation by the same couple.
Despite the existence of an extensive literature on group decision making, synthesized in Dellaert et al. (1998) and Vermeulen (2002), there has been a limited focus on ways in which multiple agents have been recognized in the formalization of discrete choice models. This literature can broadly be divided into two streams: (i) a focus on the game playing between agents in a sequential choice process that involves initial preferences (with or without knowledge of the other agent’s choice), followed by a process of feedback, review, and revision or maintenance of the initial preference. This approach endogenizes the preferences of other decision makers in the ultimate group decision. We call this interactive agency choice experiments (IACE), as developed initially by Hensher and detailed in Brewer and Hensher (2000) and Rose and Hensher (2004). (ii) studies that develop ways of establishing the influence and power of each agent in the joint choice outcome, which may or may not use an IACE framework. Puckett and Hensher (2006) review this literature, which is primarily in marketing and household economics and has, for example, been extended and implemented in the study of freight distribution chains by Hensher et al. (2008), to the study of partnering between bus operators and the regulator by Hensher and Knowles (2007) and, most recently, to the household purchase of alternative fueled vehicles by Hensher et al. (2011) and Beck et al. (2012).
6 - Experimental design and choice experiments
- from Part I - Getting started
- David A. Hensher, University of Sydney, John M. Rose, University of Sydney, William H. Greene, New York University
-
- Book:
- Applied Choice Analysis
- Published online:
- 05 June 2015
- Print publication:
- 11 June 2015, pp 189-319
-
- Chapter
-
Summary
As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.
(Einstein 1921)
This chapter was co-authored with Michiel Bliemer and Andrew Collins.
Introduction
This chapter might be regarded as a diversion from the main theme of discrete choice models and estimation; however, the popularity of stated choice (SC) data developed within a formal framework known as the “design of choice experiments” is sufficient reason to include one chapter on the topic, a topic growing in such interest that it justifies an entire book-length treatment. In considering the focus of this chapter (in contrast to the chapter in the first edition), we have decided to focus on three themes. The first is a broad synthesis of what is essentially experimental design in the context of data needs for choice analysis (essentially material edited from the first edition). The second is an overview in reasonable chronological order of the main developments in the literature on experimental design, drawing on the contribution of Rose and Bliemer (2014), providing an informative journey on the evolution of approaches that are used to varying degrees in the design and implementation of choice experiments. With the historical record in place, we then focus on a number of topics which we believe need to be given a more detailed treatment, which includes sample size issues, best–worst designs, and pivot designs. We draw on the key contributions in Rose and Bliemer (2012, 2013); Rose (2014); and Rose et al. (2008). We use Ngene (Choice Metrics 2012), a comprehensive tool that complements Nlogit5 and which has the capability to design the wide range of choice experiments discussed in this chapter, and to provide syntax for use in a few of the designs. We refer the reader to the Ngene manual for more details (www.choice-metrics.com/documentation.html).
18 - Ordered choices
- from Part III - The suite of choice models
- David A. Hensher, University of Sydney, John M. Rose, University of Sydney, William H. Greene, New York University
-
- Book:
- Applied Choice Analysis
- Published online:
- 05 June 2015
- Print publication:
- 11 June 2015, pp 804-835
-
- Chapter
-
Summary
Introduction
A growing number of empirical studies involve the assessment of influences on a choice among ordered discrete alternatives. Ordered logit and probit models are well known, including extensions to accommodate random parameters (RP) and heteroskedasticity in unobserved variance (see, e.g., Bhat and Pulugurtha 1998; Greene 2007). The ordered choice model allows for non-linear effects of any variable on the probabilities associated with each ordered level (see, e.g., Eluru et al. 2008). However, the traditional ordered choice model is potentially limited, behaviorally, in that it holds the threshold values to be fixed. This can lead to inconsistent (i.e., incorrect) estimates of the effects of variables. Extending the ordered choice random parameter model to account for threshold random heterogeneity, as well as underlying systematic sources of explanation for unobserved heterogeneity, is a logical extension in line with the growing interest in choice analysis in establishing additional candidate sources of observed and unobserved taste heterogeneity.
A substantive application is used to illustrate the behavioral gains from generalizing the ordered choice model to accommodate random thresholds in the presence of RP. It focusses on the role that a specific attribute processing strategy, that of preserving or ignoring each attribute, plays when choosing among unlabeled attribute packages of alternative tolled and non-tolled routes for the commuting trip in a stated choice experiment (see Hensher 2001a, 2004, 2008). The ordering represents the number of attributes attended to from the full set. Despite a growing number of studies focussing on these issues (see, e.g., Cantillo et al. 2006; Hensher 2006; Swait 2001; Campbell et al. 2008), the entire domain of every attribute is typically treated as relevant to some degree, and included in the utility expressions for every individual. While we acknowledge the extensive study of non-linearity in attribute specification, which permits varying marginal (dis)utility over an attribute's range, including accounting for asymmetric preferences under conditions of gain and loss (see Hess et al. 2008), this is not the same as establishing ex ante the extent to which a specific attribute might be excluded from consideration altogether, for all manner of reasons, including the influence of the design of a choice experiment when stated choice data are being used.
13 - Getting more from your model
- from Part III - The suite of choice models
- David A. Hensher, University of Sydney, John M. Rose, University of Sydney, William H. Greene, New York University
- Book: Applied Choice Analysis
- Published online: 05 June 2015
- Print publication: 11 June 2015, pp 492-559
Summary
Where facts are few, experts are many.
(Donald R. Gannon)
Introduction
In Chapter 11 we presented the standard output generated by Nlogit for the multinomial logit (MNL) choice model. By the addition of supplementary commands to the basic command syntax, the analyst is able to generate further output to aid in an understanding of choice. We present some of these additional commands now. As before, we demonstrate how the command syntax should appear and detail line by line how to interpret the output. The revealed preference (RP) data in the North West travel choice data set is used to illustrate the set of commands and outputs.
The entire command set up and model output is given up front so that the reader can see at a glance the commands used in this chapter. The command set up has two choice models: the first is the MNL model, estimated to obtain the standard set of parameter estimates as well as useful additional outputs such as elasticities, partial (or marginal) effects, and prediction success; the second MNL model uses the parameter estimates from the first model to undertake “what if” analysis using ;simulation and ;scenario, which involves selecting the relevant alternatives and attributes you want to change in order to predict the absolute and relative change in the choice shares. Arc elasticities can be inferred from the scenario analysis, since it provides before and after choice shares associated with before and after attribute levels.
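The arc elasticity calculation described above is a simple midpoint formula. Here is a minimal Python sketch, with hypothetical before/after shares and attribute levels chosen purely for illustration (they are not output from the chapter's models):

```python
def arc_elasticity(q0, q1, p0, p1):
    """Arc (midpoint) elasticity: proportional change in the choice share
    divided by the proportional change in the attribute, each evaluated
    at the midpoint of its before and after values."""
    return ((q1 - q0) / ((q0 + q1) / 2.0)) / ((p1 - p0) / ((p0 + p1) / 2.0))

# Hypothetical scenario: a mode's share falls from 0.30 to 0.27 after
# its cost attribute rises from 2.00 to 2.20
e = arc_elasticity(0.30, 0.27, 2.00, 2.20)
```

The midpoint form is symmetric in the before and after states, which is why it suits scenario output: the same value is obtained whichever state is labeled "before."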
Frontmatter
- David A. Hensher, University of Sydney, John M. Rose, University of Sydney, William H. Greene, New York University
- Book: Applied Choice Analysis
- Published online: 05 June 2015
- Print publication: 11 June 2015, pp i-iv
2 - Choosing
- from Part I - Getting started
- David A. Hensher, University of Sydney, John M. Rose, University of Sydney, William H. Greene, New York University
- Book: Applied Choice Analysis
- Published online: 05 June 2015
- Print publication: 11 June 2015, pp 16-29
Summary
As soon as questions of will or decision or reason or choice of action arise, human science is at a loss.
(Noam Chomsky, 1928–)
Introduction
Individuals are born traders. They consciously or subconsciously make decisions by comparing alternatives and selecting an action that we call a choice outcome. As simple as the observed outcome may be to the decision maker (i.e., the chooser), the analyst who is trying to explain this choice outcome through some captured data will never have available all the information required to be able to explain the choice outcome fully. This challenge becomes even more demanding as we study the population of individuals, since differences between individuals abound.
If the world of individuals could be represented by one person, then life for the analyst would be greatly simplified, because whatever choice response we elicit from that one person could be expanded to the population as a whole to get the overall number of individuals choosing a specific alternative. Unfortunately there is a huge amount of variability in the reasoning underlying decisions made by a population of individuals. This variability, often referred to as heterogeneity, is in the main not observed by the analyst. The challenge is to find ways of observing and hence measuring this variability, maximizing the amount of measured variability (or observed heterogeneity) and minimizing the amount of unmeasured variability (or unobserved heterogeneity). The main task of the choice analyst is to capture such information through data collection, and to recognize that any information not captured in the data (be it known but not measured, or simply unknown) is still relevant to an individual’s choice, and must somehow be included in the effort to explain choice behavior.
11 - Getting started modeling: the workhorse – multinomial logit
- from Part III - The suite of choice models
- David A. Hensher, University of Sydney, John M. Rose, University of Sydney, William H. Greene, New York University
- Book: Applied Choice Analysis
- Published online: 05 June 2015
- Print publication: 11 June 2015, pp 437-471
Summary
An economist is an expert who will know tomorrow why the things he predicted yesterday didn’t happen today.
(Laurence J. Peter, 1919–90)
Introduction
In this chapter we demonstrate, through the use of a labeled mode choice data set (summarized in Appendix 11A to this chapter), how to model choice data by means of Nlogit. In writing this chapter we have been very specific. We demonstrate line by line the commands necessary to estimate a model in Nlogit. We do likewise with the output, describing in detail what each line of output means in practical terms. Knowing that “one must learn to walk before one runs,” we begin with estimation of the most basic of choice models, the multinomial logit (MNL). We devote Chapter 12 to additional output that may be obtained for the basic MNL model and later chapters (especially Chapters 21–22) to more advanced models.
Modeling choice in Nlogit: the MNL command
The basic commands necessary for the estimation of choice models in Nlogit are as follows:
NLOGIT
;lhs = choice, cset, altij
;choices = <names of alternatives>
;Model:
U(alternative 1 name) = <utility function 1>/
U(alternative 2 name) = <utility function 2>/
…
U(alternative i name) = <utility function i>$
We will use this command syntax with the labeled mode choice data described in Chapter 10, shown here as:
Nlogit
;lhs = choice, cset, altij
;choices = bs,tn,bw,cr
;model:
u(bs) = bs + actpt*act + invcpt*invc + invtpt*invt2 + egtpt*egt + trpt*trnf /
u(tn) = tn + actpt*act + invcpt*invc + invtpt*invt2 + egtpt*egt + trpt*trnf /
u(bw) = bw + actpt*act + invcpt*invc + invtpt*invt2 + egtpt*egt + trpt*trnf /
u(cr) = invccr*invc + invtcar*invt + TC*TC + PC*PC + egtcar*egt $
While other command structures are possible (e.g., using RHS and RH2 instead of specifying the utility functions; we do not describe these here and refer the interested reader to Nlogit's help references), the above format provides the analyst with the greatest flexibility in specifying choice models. It is for this reason that we use this command format over the other formats available.
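To see what Nlogit computes from utility functions like those above, the MNL choice probabilities can be sketched in a few lines of Python. This is an illustration of the standard logit formula only, not a reproduction of Nlogit's estimator, and the utility values are hypothetical rather than estimates from the North West data:

```python
import numpy as np

def mnl_probs(V):
    """MNL choice probabilities: P_i = exp(V_i) / sum_j exp(V_j)."""
    eV = np.exp(V - np.max(V))   # subtract the max for numerical stability
    return eV / eV.sum()

# Hypothetical utilities for the four alternatives bs, tn, bw, cr
V = np.array([-0.6, -0.2, -0.9, 0.1])
P = mnl_probs(V)
```

In estimation, each V would be built from the attribute data exactly as the utility functions above specify (generic parameters such as actpt shared across the transit alternatives, car-specific parameters for cr), and the parameters chosen to maximize the likelihood of the observed choices.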